Smoothed Efficient Algorithms and Reductions for Network Coordination Games
Worst-case hardness results for most equilibrium computation problems have
raised the need for beyond-worst-case analysis. To this end, we study the
smoothed complexity of finding pure Nash equilibria in Network Coordination
Games, a PLS-complete problem in the worst case. This is a potential game where
the sequential-better-response algorithm is known to converge to a pure NE,
albeit in exponential time. First, we prove polynomial (resp. quasi-polynomial)
smoothed complexity when the underlying game graph is a complete (resp.
arbitrary) graph, and every player has constantly many strategies. We note that
the complete graph case is reminiscent of perturbing all parameters, a common
assumption in most known smoothed analysis results.
Second, we define a notion of smoothness-preserving reduction among search
problems, and obtain reductions from 2-strategy network coordination games to
local-max-cut, and from k-strategy games (with arbitrary k) to
local-max-cut up to two flips. The former, together with the recent result of
[BCC18], gives an alternate smoothed-efficient algorithm for the
2-strategy case. This notion of reduction allows for the extension of
smoothed efficient algorithms from one problem to another.
For the first set of results, we develop techniques to bound the probability
that an (adversarial) better-response sequence makes slow improvements on the
potential. Our approach combines and generalizes the local-max-cut approaches
of [ER14,ABPW17] to handle the multi-strategy case: it requires a careful
definition of the matrix which captures the increase in potential, a tighter
union bound on adversarial sequences, and balancing it with good enough rank
bounds. We believe that the approach and notions developed herein could be of
interest in addressing the smoothed complexity of other potential and/or
congestion games.
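The sequential better-response dynamics and the edge-sum potential described above can be sketched as follows. The instance at the bottom (complete graph, two strategies per player, small uniform perturbations of low-precision base payoffs) is a hypothetical toy, and `max_passes` is only a safety cap, not the paper's smoothed runtime bound:

```python
import random

def util(u, s, payoff):
    """Payoff of player u under profile s: the sum, over edges incident
    to u, of the shared edge payoff (a coordination game)."""
    return sum(M[s[a]][s[b]] for (a, b), M in payoff.items() if u in (a, b))

def better_response(n, k, payoff, max_passes=10**5):
    """Sequential better-response dynamics.  Every accepted move strictly
    increases the potential (the sum of all edge payoffs), so the dynamics
    terminate at a pure Nash equilibrium, though possibly after
    exponentially many moves in the worst case."""
    s = [0] * n
    for _ in range(max_passes):
        improved = False
        for u in range(n):
            base = util(u, s, payoff)
            for t in range(k):
                old = s[u]
                s[u] = t
                if util(u, s, payoff) > base + 1e-12:
                    improved = True
                    break            # keep the improving strategy
                s[u] = old           # revert a non-improving switch
        if not improved:
            return s                 # a full pass with no move: pure NE
    raise RuntimeError("no convergence within max_passes")

# Smoothed toy instance: low-precision "adversarial" payoffs plus small
# independent uniform noise, on the complete graph with k = 2 strategies.
random.seed(0)
n, k, sigma = 5, 2, 0.1
payoff = {(u, v): [[round(random.random(), 1) + random.uniform(0, sigma)
                    for _ in range(k)] for _ in range(k)]
          for u in range(n) for v in range(u + 1, n)}
s = better_response(n, k, payoff)
```

With generic (perturbed) payoffs every accepted move raises the potential by a strictly positive amount, which is exactly the quantity the paper's probabilistic analysis bounds from below.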
Search and optimization with randomness in computational economics: equilibria, pricing, and decisions
In this thesis we study search and optimization problems from computational economics with primarily stochastic inputs. The results are grouped into two categories: First, we address the smoothed analysis of Nash equilibrium computation. Second, we address two pricing problems in mechanism design, and solve two economically motivated stochastic optimization problems.
Computing Nash equilibria is a central question in the game-theoretic study of economic systems of agent interactions. The worst-case analysis of this problem has been studied in depth, but little was known beyond the worst case. We study this problem in the framework of smoothed analysis, where adversarial inputs are randomly perturbed. We show that computing Nash equilibria is hard for 2-player games even when input perturbations are large. This is despite the existence of approximation algorithms in a similar regime. In doing so, our result disproves a conjecture relating approximation schemes to smoothed analysis. Despite the hardness results in general, we also present a special case of co-operative games, where we show that the natural greedy algorithm for finding equilibria has polynomial smoothed complexity. We also develop reductions which preserve smoothed analysis.
In the second part of the thesis, we consider optimization problems which are motivated by economic applications. We address two stochastic optimization problems. We begin by developing optimal methods to determine the best among binary classifiers, when the objective function is known only through pairwise comparisons, e.g. when the objective function is the subjective opinion of a client. Finally, we extend known algorithms in the Pandora's box problem --- a classic optimal search problem --- to an order-constrained setting which allows for richer modelling.
The remaining chapters address two pricing problems from mechanism design. First, we provide an approximately revenue-optimal pricing scheme for the problem of selling time on a server to jobs whose parameters are sampled i.i.d. from an unknown distribution. We then tackle the problem of fairly dividing chores among a collection of economic agents via a competitive equilibrium, which balances assigned tasks with payouts. We give efficient algorithms to compute such an equilibrium.
Pandora's Box Problem with Order Constraints
The Pandora's Box Problem, originally formalized by Weitzman in 1979, models
selection from a set of random alternative options when evaluation is costly.
This includes, for example, the problem of hiring a skilled worker, where only
one hire can be made, but the evaluation of each candidate is an expensive
procedure. Weitzman showed that the Pandora's Box Problem admits an elegant,
simple solution, where the options are considered in decreasing order of
reservation value, i.e., the value at which the expected marginal gain from
opening the box is zero. We study this problem, for the first time, under
order (or precedence) constraints imposed between the boxes. We show that,
despite the difficulty of defining reservation values for the boxes which take
into account both in-depth and in-breadth exploration of the various options,
greedy optimal strategies exist and can be efficiently computed for tree-like
order constraints. We also prove that finding approximately optimal adaptive
search strategies is NP-hard when certain matroid constraints are used to
further restrict the set of boxes which may be opened, or when the order
constraints are given as reachability constraints on a DAG. We complement the
above result by giving approximate adaptive search strategies based on a
connection between optimal adaptive strategies and non-adaptive strategies with
bounded adaptivity gap for a carefully relaxed version of the problem.
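Weitzman's rule can be sketched as follows for discrete prize distributions. The bisection solver and the `(values, probs, cost)` box format are illustrative assumptions, and the policy shown is the classic unconstrained one; it does not handle the order constraints studied in this paper:

```python
import random

def reservation_value(values, probs, cost, lo=-1e6, hi=1e6, tol=1e-9):
    """Weitzman's reservation value for a discrete prize distribution:
    the sigma solving  cost = E[max(X - sigma, 0)],  i.e. the value at
    which the expected marginal gain from opening the box is zero.  The
    right-hand side is continuous and non-increasing in sigma, so
    bisection applies."""
    def excess(sig):
        return sum(p * max(v - sig, 0.0) for v, p in zip(values, probs))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if excess(mid) > cost:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def weitzman_search(boxes, rng):
    """Weitzman's (unconstrained) index policy: open boxes in decreasing
    order of reservation value; stop once the best prize seen so far
    meets every remaining reservation value.  Returns the prize kept
    minus the total opening cost paid."""
    order = sorted(boxes, key=lambda b: -reservation_value(*b))
    best, spent = 0.0, 0.0
    for values, probs, cost in order:
        if best >= reservation_value(values, probs, cost):
            break
        spent += cost
        best = max(best, rng.choices(values, weights=probs)[0])
    return best - spent

box = ([1.0, 0.0], [0.5, 0.5], 0.25)    # prize 1 or 0, opening cost 0.25
sigma = reservation_value(*box)          # 0.5, since 0.5 * (1 - 0.5) = 0.25
```

Under tree-like order constraints the correct indices must account for both in-depth and in-breadth exploration, which is exactly what makes the constrained problem harder than this sketch.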
Online Revenue Maximization for Server Pricing
Efficient and truthful mechanisms to price resources on remote
servers/machines have been the subject of much work in recent years due to the
importance of the cloud market. This paper considers revenue maximization in
the online stochastic setting with non-preemptive jobs and a unit capacity
server. One agent/job arrives at every time step, with parameters drawn from an
underlying unknown distribution.
We design a posted-price mechanism which can be efficiently computed, and is
revenue-optimal in expectation and in retrospect, up to additive error. The
prices are posted prior to learning the agent's type, and the computed pricing
scheme is deterministic, depending only on the length of the allotted time
interval and on the earliest time the server is available. If the distribution
of agent types is only learned from observing the jobs that are executed, we
prove that a polynomial number of samples is sufficient to obtain a
near-optimal truthful pricing strategy.
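The posted-price setting above can be illustrated with a minimal simulation of a non-preemptive, unit-capacity server. The pricing rule `toy_price` below is a placeholder for illustration only, not the near-optimal scheme of the paper; the accept/reject loop shows the mechanics of a price that depends on the requested length and on the earliest time the server is free:

```python
def run_posted_price(jobs, price):
    """Toy simulation of a posted-price, non-preemptive, unit-capacity
    server.  At each step one agent arrives; the price for each possible
    job length is posted before the agent's type is revealed, and the
    agent takes the slot iff its value meets the posted price."""
    revenue, free_at = 0.0, 0
    for now, (length, value) in enumerate(jobs):
        p = price(length, free_at, now)
        if value >= p:                       # agent accepts the posted price
            revenue += p
            free_at = max(now, free_at) + length
    return revenue

# Placeholder menu (an illustrative assumption, not the paper's scheme):
# charge per unit of requested time, plus a surcharge that grows with the
# wait until the server is next available.
def toy_price(length, free_at, now):
    delay = max(0, free_at - now)
    return 0.5 * length + 0.1 * delay

jobs = [(2, 5.0), (1, 0.1)]                  # (length, value) per arriving agent
rev = run_posted_price(jobs, toy_price)      # 1.0: first accepts at 1.0, second declines 0.6
```

Note that the price is computed from public state only (requested length, current time, server availability), which is what makes a posted-price mechanism truthful by construction.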